Convergence properties of gradient descent noise reduction
Abstract
Gradient descent noise reduction is a technique that attempts to recover the true signal, or trajectory, from noisy observations of a non-linear dynamical system for which the dynamics are known. This paper provides the first rigorous proof that the algorithm will recover the original trajectory for a broad class of dynamical systems under certain conditions. The proof is obtained using ideas f...
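As a rough illustration of the underlying technique (a minimal sketch, not the paper's formulation or its proof), one can treat the noisy series as an initial guess and run gradient descent on the dynamic error Σ_t (x[t+1] − f(x[t]))² for a known map f. The logistic map, noise level, learning rate, and step count below are all illustrative assumptions.

```python
import numpy as np

def logistic(x, r=3.9):
    """Known one-dimensional dynamics f(x) = r x (1 - x)."""
    return r * x * (1.0 - x)

def logistic_deriv(x, r=3.9):
    return r * (1.0 - 2.0 * x)

def gd_noise_reduction(y, steps=5000, lr=0.02):
    """Gradient descent on the dynamic error
        E(x) = sum_t (x[t+1] - f(x[t]))**2,
    starting from the noisy observations y."""
    x = y.copy()
    for _ in range(steps):
        e = x[1:] - logistic(x[:-1])                     # per-step dynamic errors
        grad = np.zeros_like(x)
        grad[1:] += 2.0 * e                              # derivative w.r.t. x[t+1]
        grad[:-1] += -2.0 * e * logistic_deriv(x[:-1])   # derivative w.r.t. x[t]
        x -= lr * grad
    return x

# Noisy observations of a true logistic trajectory.
rng = np.random.default_rng(0)
true = np.empty(200)
true[0] = 0.3
for t in range(199):
    true[t + 1] = logistic(true[t])
y = true + 0.01 * rng.standard_normal(200)

x_hat = gd_noise_reduction(y)
print("rms error before:", np.sqrt(np.mean((y - true) ** 2)),
      "after:", np.sqrt(np.mean((x_hat - true) ** 2)))
```

With a stable step size the descent drives the dynamic error toward zero; whether the limit is the original trajectory, rather than merely some nearby dynamically consistent one, is exactly the question the paper addresses.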
Similar resources

Shadowing Pseudo-Orbits and Gradient Descent Noise Reduction
Shadowing trajectories are one of the most powerful ideas of modern dynamical systems theory, providing a tool for proving some central theorems and a means to assess the relevance of models and numerically computed trajectories of chaotic systems. Shadowing has also been seen to have a role in state estimation and forecasting of nonlinear systems. Shadowing trajectories are guaranteed to exist...
Convergence Analysis of Gradient Descent Stochastic Algorithms
This paper proves convergence of a sample-path based stochastic gradient-descent algorithm for optimizing expected-value performance measures in discrete event systems. The algorithm uses increasing precision at successive iterations, and it moves against the direction of a generalized gradient of the computed sample performance function. Two convergence results are established: one, for the ca...
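A minimal sketch of the increasing-precision idea, under assumed toy choices (a quadratic performance measure and a sample size growing quadratically with the iteration count): each iteration averages more sample-path gradients before taking a step.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_grad(theta, n):
    """Monte-Carlo gradient of E[(theta - xi)^2] with xi ~ N(2, 1),
    estimated from an n-sample average (the 'sample path')."""
    xi = rng.normal(2.0, 1.0, size=n)
    return np.mean(2.0 * (theta - xi))

theta, lr = 0.0, 0.1
for k in range(1, 201):
    n_k = k * k                       # increasing precision at successive iterations
    theta -= lr * sample_grad(theta, n_k)

print(theta)                          # approaches the minimizer E[xi] = 2
```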
On the Convergence of Decentralized Gradient Descent
Consider the consensus problem of minimizing f(x) = Σ_{i=1}^n f_i(x), where each f_i is known only to a single agent i belonging to a connected network of n agents. All the agents shall collaboratively solve this problem and obtain the solution via data exchanges only between neighboring agents. Such algorithms avoid the need for a fusion center, offer better network load balance, and improve da...
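A minimal sketch of decentralized gradient descent under illustrative assumptions (a ring network, quadratic local objectives f_i(x) = (x − c_i)², and a hand-built doubly stochastic mixing matrix): each agent averages with its neighbors and then takes a local gradient step, with no fusion center.

```python
import numpy as np

# Ring network of n agents; agent i privately holds f_i(x) = (x - c_i)^2.
n = 5
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Doubly stochastic mixing matrix for the ring (self plus two neighbours).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)                 # each agent's local estimate
alpha = 0.05
for _ in range(2000):
    grads = 2.0 * (x - c)       # purely local gradients, no fusion center
    x = W @ x - alpha * grads   # mix with neighbours, then descend

print(x)  # entries cluster around mean(c) = 3, the minimizer of sum f_i
```

With a fixed step size the agents agree only up to an O(alpha) consensus error around the true minimizer; diminishing step sizes remove this bias.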
Convergence of Gradient Descent on Separable Data
The implicit bias of gradient descent is not fully understood even in simple linear classification tasks (e.g., logistic regression). Soudry et al. (2018) studied this bias on separable data, where there are multiple solutions that correctly classify the data. It was found that, when optimizing monotonically decreasing loss functions with exponential tails using gradient descent, the linear cla...
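A minimal sketch of the separable-data setting (the toy data and hyperparameters are assumptions): gradient descent on the logistic loss drives the loss to zero while the weight norm grows without bound, and the direction w/||w|| stabilizes toward the max-margin separator.

```python
import numpy as np

# Linearly separable toy data with labels y in {-1, +1}.
X = np.array([[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = np.zeros(2)
lr = 0.1
for _ in range(100000):
    margins = y * (X @ w)
    # Gradient of the logistic loss sum_i log(1 + exp(-margin_i)).
    g = -(X * (y / (1.0 + np.exp(margins)))[:, None]).sum(axis=0)
    w -= lr * g

# ||w|| keeps growing, but the direction stabilises:
print(w / np.linalg.norm(w))   # close to the max-margin (hard-SVM) direction
```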
Journal

Journal title: Physica D: Nonlinear Phenomena
Year: 2002
ISSN: 0167-2789
DOI: 10.1016/s0167-2789(02)00376-7